%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2019/09.15.02.10
%2 sid.inpe.br/sibgrapi/2019/09.15.02.10.34
%@doi 10.1109/SIBGRAPI.2019.00043
%T Dynamic Sign Language Recognition Based on Convolutional Neural Networks and Texture Maps
%D 2019
%A Cardenas, Edwin Jonathan Escobedo,
%A Cerna, Lourdes Ramirez,
%A Chavez, Guillermo Camara,
%@affiliation Federal University of Ouro Preto
%@affiliation National University of Ouro Preto
%@affiliation Federal University of Ouro Preto
%E Oliveira, Luciano Rebouças de,
%E Sarder, Pinaki,
%E Lage, Marcos,
%E Sadlo, Filip,
%B Conference on Graphics, Patterns and Images, 32 (SIBGRAPI)
%C Rio de Janeiro, RJ, Brazil
%8 28-31 Oct. 2019
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K CNN, sign language, texture maps.
%X Sign language recognition (SLR) is a very challenging task due to the complexity of learning or developing descriptors to represent its primary parameters (location, movement, and hand configuration). In this paper, we propose a robust deep-learning-based method for sign language recognition. Our approach represents multimodal information (RGB-D) through texture maps to describe the hand location and movement. Moreover, we introduce an intuitive method to extract a representative frame that describes the hand shape. Next, we use this information as input to three-stream and two-stream CNN models to learn robust features capable of recognizing a dynamic sign. We conduct our experiments on two sign language datasets, and the comparison with state-of-the-art SLR methods reveals the superiority of our approach, which optimally combines texture maps and hand shape for SLR tasks.
%@language en
%3 PID111.pdf